Saturday, July 04, 2009
Written by Finn Ellebaek Nielsen
In this blog post I'll suggest various best practices for establishing a
test policy, as well as a test strategy, for Oracle code. By test policy
I refer to the high-level approach towards testing. A test strategy, on
the other hand, gives the rules for testing in more detail. Both must fit
into the overall test policy/strategy that covers all the tiers and sites
in the application architecture, so in terms of the bigger picture the
test policy/strategy for Oracle code are partial.
Introduction
I'll focus on the following very different circumstances:
- Legacy project: Typically an old, tangled, poorly documented system with little or no automated testing.
- Greenfield project: New development from scratch.
As always with these blog posts, my focus is on the business logic
defined through PL/SQL programs, triggers, views, materialized views etc.
(collectively referred to as Oracle code) stored in the Oracle
database.
Test Policy
I've given many arguments for testing Oracle code in the database in
my previous blog post "Reasons for Testing Oracle Code in the Database"
so it should come as no surprise that my suggestion for a partial test
policy is:
- Automate unit and integration testing of Oracle code as much as possible, and run those tests within the database (a sketch of such a test follows this list).
- The Oracle developers carry out the test design (with potential help from the system testers) and test implementation.
- Everybody on the team can and should execute the automated tests.
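To make this concrete, here is a minimal, hand-rolled sketch of what such an in-database test can look like. The ORDER_MGR package, its ORDER_TOTAL function and the ORDERS/ORDER_LINES tables are hypothetical; in practice a tool like Quest Code Tester for Oracle would generate and manage tests of this kind for you:

    CREATE OR REPLACE PROCEDURE test_order_total IS
      l_total NUMBER;
    BEGIN
      -- Arrange: fabricate a known order within the test's own transaction.
      INSERT INTO orders (order_id, customer_id) VALUES (-1, -1);
      INSERT INTO order_lines (order_id, line_no, amount) VALUES (-1, 1, 100);
      INSERT INTO order_lines (order_id, line_no, amount) VALUES (-1, 2, 50);

      -- Act: call the unit under test.
      l_total := order_mgr.order_total(p_order_id => -1);

      -- Assert: fail loudly if the result is wrong.
      IF l_total = 150 THEN
        DBMS_OUTPUT.PUT_LINE('test_order_total: PASS');
      ELSE
        RAISE_APPLICATION_ERROR(-20000,
          'test_order_total: FAIL, expected 150, got '
          || NVL(TO_CHAR(l_total), 'NULL'));
      END IF;

      -- Clean up so the test leaves no trace behind.
      ROLLBACK;
    END test_order_total;
    /

Because the test runs entirely inside the database, anyone on the team can execute it from SQL*Plus, Toad or a build script without any client-side setup.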
With the test policy in place, let's look at how to establish partial
test strategies under the two different situations mentioned above.
Legacy
This is probably the most common circumstance you'll encounter.
Development teams have been building and deploying successful,
mission-critical applications based on PL/SQL and SQL for more than 15
years. That amounts to a lot of legacy code, some of it no doubt easy to understand and maintain, most of it spaghetti code.
Challenges
You likely face challenges in the following areas as you consider putting together a test policy for your legacy application:
- Source code size: Thousands of lines of PL/SQL code and numerous
complex views, some of which are based on each other at multiple
levels.
- Documentation: The documentation is in a poor state and hasn't been updated for a long time.
- Dependencies: A large set of PL/SQL packages has been developed
without clear separation and interfaces -- they are heavily
interdependent. This means that it's difficult to test them at the unit
level, one by one, as each unit requires specific test data, often
the output of another subprogram, in order to work correctly.
- Testability: Large amounts of up-to-date production data are used
for manual testing; no effort has been made towards fabricating
specific and limited test data (see the fixture sketch after this list).
The code is developed in a way that makes it hard to test. It may need
refactoring, which is a risk in itself, as you're then introducing
changes to the existing code without being able to see whether you're
also introducing defects.
- Tests: There are no (or hardly any) standardized unit tests. A
few developers have written proprietary scripts whose code coverage
is poor and which only run on specific PCs owing to hardcoded file
path names and inherent assumptions about program settings (e.g.
SQL*Plus or Toad). Furthermore, the test scripts assume that test
data is set up in a specific way, with hardcoded primary keys.
- Unfamiliar code: The original developers no longer work for the
company. The new developers don't like to refactor the source code as
they're afraid of breaking it owing to the lack of documentation and
automated tests.
- Skepticism: Developers are often skeptical about automated unit
testing. It is not uncommon to hear: "Why do we have to test our code?
That's what we have testers for".
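The testability and tests challenges above share one root cause: the tests depend on data they don't control. Here is a sketch of a self-contained fixture that fabricates its own data instead; the CUSTOMERS table and CUSTOMERS_SEQ sequence are illustrative names, not part of any real schema:

    -- Fabricates a minimal, known customer and returns its generated key,
    -- so tests never rely on production rows or hardcoded primary keys.
    CREATE OR REPLACE PROCEDURE setup_customer_fixture (
      p_customer_id OUT customers.customer_id%TYPE
    ) IS
    BEGIN
      -- Generate the key instead of hardcoding it, so the test runs on
      -- any schema regardless of what data is already present.
      SELECT customers_seq.NEXTVAL INTO p_customer_id FROM dual;

      INSERT INTO customers (customer_id, name, status)
      VALUES (p_customer_id, 'Test customer', 'ACTIVE');
    END setup_customer_fixture;
    /

Each test calls the fixture, captures the generated key, runs its assertions against that key, and rolls back afterwards, so it behaves identically on every developer's schema.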
Test Strategy
The above-mentioned challenges likely seem overwhelming, maybe
even insurmountable. Before you give up on the idea of improved testing,
however, I suggest that you design strategies that address these
challenges in a manageable way. You can apply one (or two, if they're not
mutually exclusive) of the following test strategies to your
legacy project:
- Big bang: Rip x months out of the calendar and assign all the
developers on the team (apart from a few on production support)
to implementing fine-grained unit tests for the entire Oracle code base.
- Incremental, top-down: Introduce high-level regression tests
covering the units that are most critical in the system and change
most frequently. For instance, in a system with a complex set of nested
views based on a complex set of PL/SQL packages, automated testing could
be introduced for one high-level view at a time (see the regression
test sketch after this list).
- Incremental, bottom-up: Introduce low-level unit tests and
medium-level integration tests covering the units that are most critical
in the system and change most frequently. In the example system
above, automated testing could be introduced for one PL/SQL
package at a time, perhaps for only a few procedures/functions in that
package at a time if the package is very large.
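For the top-down option, a regression test can pin down the output of one high-level view at a time. The sketch below compares a hypothetical V_CUSTOMER_BALANCES view against a snapshot table, EXPECTED_CUSTOMER_BALANCES, populated once from known-good output over fabricated test data; any later deviation, whatever nested view or package caused it, makes the test fail:

    DECLARE
      l_diffs NUMBER;
    BEGIN
      -- Count rows that appear in one set but not the other (both directions).
      SELECT COUNT(*) INTO l_diffs FROM (
        (SELECT customer_id, balance FROM v_customer_balances
         MINUS
         SELECT customer_id, balance FROM expected_customer_balances)
        UNION ALL
        (SELECT customer_id, balance FROM expected_customer_balances
         MINUS
         SELECT customer_id, balance FROM v_customer_balances)
      );

      IF l_diffs = 0 THEN
        DBMS_OUTPUT.PUT_LINE('v_customer_balances regression: PASS');
      ELSE
        RAISE_APPLICATION_ERROR(-20001,
          'v_customer_balances regression: ' || l_diffs || ' row(s) differ');
      END IF;
    END;
    /

The same pattern applies to the bottom-up option, only at the level of individual package subprograms instead of views.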
The big bang approach is difficult to implement successfully because of the following factors:
- The feedback in terms of improved software quality takes too long
to arrive, so it's difficult for the developers to keep up their optimism and pace.
- Users get frustrated that they won't get any new features for x months.
- It's expensive, since time is spent constructing tests that cover units which may never even change.
Both the incremental options should be expanded over time, as a need
arises to change a unit that isn't already covered by automated tests.
Over time, the incremental approaches will end up covering large parts
of the system, with a heavy emphasis on those programs that are
maintained (and therefore likely used) most often. If the bottom-up
approach is difficult owing to internal code dependencies and lack of
testability, consider following the top-down path initially and then
introduce bottom-up over time.
The incremental options are appealing because they are implemented
gradually, allowing everybody to see the positive effects sooner rather
than later. The Return On Investment from the incremental options comes
relatively quickly, especially if the system was previously tested
manually (time-consuming and error-prone) before each deployment (every 2-4
weeks in many Agile projects).
The test cases should be based on a mix of system requirements (they
have most likely changed during the project's lifetime), historical
defects reported, and a consolidation of any developer-specific test
scripts that have been created over time. The testers should help the
developers design the tests, lending their expertise (especially when
it comes to brainstorming test scenarios, something we have found many
developers are not very good at).
Choosing the right strategies depends on various factors, such as:
- Time: How long can the users/customers wait for the next release
with critical new features? If you're in a competitive situation, how
long can you afford to delay that new release with the killer features
that will increase your market share? More testing on a legacy project
will inevitably delay the delivery of new/changed functionality in the
beginning (especially if you choose the big bang path), but you'll soon
earn this delay back, and much more, as described in my previous blog
post "Reasons for Testing Oracle Code in the Database".
- Resources: How many resources do you have available, both in
terms of people and budget? What is the cost involved with each critical
production bug, and what about the customers'/users' trust in you? How
do you convince your developers that proactive, thorough testing is a good
idea?
- Risk: What is the risk involved in not having automated tests in
place? How critical is it that a bug is found in the production
system(s)? Are people's lives at risk?
- Break-even: Where is the break-even point between the time and money
you spend on automated testing and the corresponding improvement in
software quality?
- Return On Investment: What will provide the best ROI, given the other factors?
In reality, a combination of the incremental options over time will probably be your best bet.
Another factor is how much testing effort is required; I'll discuss this in more detail in a future blog post.
Greenfield
Enough about legacy projects -- let's turn to greenfield
projects: brand-new application development! You're very
fortunate if you're in this position, since (among other great benefits)
you can design and implement your code for testability right from the start.
You should seriously consider applying Test-Driven Development (TDD),
through which you let the requirements drive the test cases and in turn
use these test cases to drive the programming (in other words, the test
cases become the to-do list for implementation). This ensures a good
level of testing, and also that you only implement what's required
instead of building what might be useful for some future set of requirements (which often never materializes).
TDD doesn't require you to change your entire project methodology to
be Agile. So how could you apply TDD to Oracle code with Quest Code
Tester for Oracle?
1. Develop the signature of the program/subprogram if it doesn't
exist already. This isn't strictly correct according to TDD, but you
need the signature in place for Quest Code Tester for Oracle to
see the declaration and allow you to create a test definition against
it. In strict TDD you develop the test before any code, so that the
first iteration of the test fails because it doesn't compile (a sketch
of the first few steps follows this list).
2. Add a test, based on a requirement, historical defect, etc.
3. Run all tests, including the new one. Identify the tests that fail.
4. Write/modify code to satisfy the tests that failed. The code
doesn't have to be elegant and it can even be redundant; for now it's
important to focus on getting the tests to pass.
5. Run all tests again. If any fail, move back to the previous step.
6. Refactor your code to remove redundancy, hardcoded behavior,
anything that doesn't comply with the architecture, standards etc.
7. Repeat by going back to step 2 or 1.
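Here is a sketch of steps 1 through 4 in plain PL/SQL, with hand-written assertions standing in for a Quest Code Tester for Oracle test definition. The PRICING package and its DISCOUNTED_PRICE function are hypothetical:

    -- Step 1: the signature plus a deliberate stub, so the tool (or a
    -- hand-written test) has a declaration to test against.
    CREATE OR REPLACE PACKAGE pricing IS
      FUNCTION discounted_price (
        p_list_price   IN NUMBER,
        p_discount_pct IN NUMBER
      ) RETURN NUMBER;
    END pricing;
    /
    CREATE OR REPLACE PACKAGE BODY pricing IS
      FUNCTION discounted_price (
        p_list_price   IN NUMBER,
        p_discount_pct IN NUMBER
      ) RETURN NUMBER IS
      BEGIN
        RETURN NULL; -- stub: the test below must fail first
      END discounted_price;
    END pricing;
    /

    -- Steps 2-3: a test derived from a requirement ("10% off 200 is 180");
    -- it fails against the stub, as TDD demands.
    BEGIN
      IF pricing.discounted_price(200, 10) = 180 THEN
        DBMS_OUTPUT.PUT_LINE('discounted_price: PASS');
      ELSE
        RAISE_APPLICATION_ERROR(-20002, 'discounted_price: FAIL');
      END IF;
    END;
    /

Step 4 then replaces the stub with, say, RETURN p_list_price * (1 - p_discount_pct / 100), the test goes green, and the refactoring step can begin.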
I'll discuss TDD in more detail in a future blog post.
I suggest that you test all Oracle code developed on greenfield projects.
Fitting Into Overall Test Policy/Strategy
There is no sense in micro-optimizing the perfect partial
test policy/strategy for your Oracle code if it doesn't match the
project's overall test policy/strategy -- they need to be coordinated,
and redundant efforts should be avoided in order to lower the cost.
If this is the first time a test policy/strategy is implemented
for Oracle code, the overall test policy and strategy may need to be
amended to accommodate Oracle code.
During this coordination, consider whether the same test cases are
being repeated at different levels and whether that repetition can be
avoided. The test policy/strategy must be agreed at the relevant levels
in the organization and with the customer and users.
Future Blog Posts
Future blog posts will cover related topics such as:
- A test approach implementing the test strategy and policy we've now laid out.
- Testing effort/depth required -- how much testing do we need?
- Test design and approach tips & tricks.
- How to break dependencies through creation/usage of mock objects in automated Oracle code testing.
- How to apply Test-Driven Development to software projects using Oracle code.
- Continuous Integration.
- Code Coverage.